The Missing Data Encoder: Cross-Channel Image Completion with Hide-And-Seek Adversarial Network
Image completion is the problem of generating whole images from fragments
only. It encompasses inpainting (generating a patch given its surrounding),
reverse inpainting/extrapolation (generating the periphery given the central
patch) as well as colorization (generating one or several channels given other
ones). In this paper, we employ a deep network to perform image completion,
with adversarial training as well as perceptual and completion losses, and call
it the "missing data encoder" (MDE). We consider several configurations based
on how the seed fragments are chosen. We show that training MDE for "random
extrapolation and colorization" (MDE-REC), i.e. using random
channel-independent fragments, allows a better capture of the image semantics
and geometry. MDE training makes use of a novel "hide-and-seek" adversarial
loss, where the discriminator seeks the original non-masked regions, while the
generator tries to hide them. We validate our models both qualitatively and
quantitatively on several datasets, demonstrating their usefulness for image
completion, unsupervised representation learning, and face occlusion
handling.
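The random channel-independent masking behind MDE-REC can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name and the `keep_frac` and `patch` parameters are assumptions.

```python
import numpy as np

def random_channel_mask(image, keep_frac=0.25, patch=8, rng=None):
    """Sample channel-independent seed fragments (MDE-REC style sketch).

    For each channel independently, a random subset of patch-aligned
    blocks is kept visible and everything else is zeroed out, so the
    network must complete missing regions per channel. `keep_frac` and
    `patch` are illustrative parameters, not taken from the paper.
    image: array of shape (C, H, W) with H, W divisible by `patch`.
    Returns (masked_image, mask), where mask is 1 on kept pixels.
    """
    rng = np.random.default_rng(rng)
    c, h, w = image.shape
    mask = np.zeros_like(image)
    for ch in range(c):  # a fresh random mask per channel
        blocks = rng.random((h // patch, w // patch)) < keep_frac
        mask[ch] = np.kron(blocks, np.ones((patch, patch)))
    return image * mask, mask
```

The generator would then be trained to reconstruct the full image from the masked input, with the hide-and-seek discriminator trying to spot which regions were originally visible.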
Maxmin convolutional neural networks for image classification
Convolutional neural networks (CNN) are widely used in computer vision,
especially in image classification. However, the way in which information and
invariance properties are encoded in deep CNN architectures is still an
open question. In this paper, we propose to modify the standard convolutional
block of CNN in order to transfer more information layer after layer while
keeping some invariance within the network. Our main idea is to exploit both
positive and negative high scores obtained in the convolution maps. This
behavior is obtained by modifying the traditional activation function step
before pooling: we double the maps with specific activation functions, a scheme
we call the MaxMin strategy. Extensive experiments on two
classical datasets, MNIST and CIFAR-10, show that our deep MaxMin convolutional
net outperforms standard CNN.
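The map-doubling step described above amounts to keeping both signs of the convolution scores instead of discarding negative ones with a plain ReLU. A minimal NumPy sketch (the function name and array layout are illustrative, not the paper's implementation):

```python
import numpy as np

def maxmin_activation(conv_maps):
    """MaxMin activation: retain positive AND negative responses.

    The channel stack is doubled: ReLU(x) is concatenated with
    ReLU(-x), so strong negative convolution scores survive pooling
    instead of being zeroed out.
    conv_maps: array of shape (channels, H, W).
    Returns an array of shape (2 * channels, H, W).
    """
    pos = np.maximum(conv_maps, 0.0)   # standard ReLU on positive scores
    neg = np.maximum(-conv_maps, 0.0)  # ReLU on negated maps keeps negatives
    return np.concatenate([pos, neg], axis=0)  # doubled channel count
```

Pooling then operates on the doubled stack, so both polarities of each filter's response are propagated to the next layer.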
Deformable Part-based Fully Convolutional Network for Object Detection
Existing region-based object detectors are limited to regions with fixed box
geometry to represent objects, even if those are highly non-rectangular. In
this paper we introduce DP-FCN, a deep model for object detection which
explicitly adapts to shapes of objects with deformable parts. Without
additional annotations, it learns to focus on discriminative elements and to
align them, and simultaneously brings more invariance for classification and
geometric information to refine localization. DP-FCN is composed of three main
modules: a Fully Convolutional Network to efficiently maintain spatial
resolution, a deformable part-based RoI pooling layer to optimize positions of
parts and build invariance, and a deformation-aware localization module
explicitly exploiting displacements of parts to improve accuracy of bounding
box regression. We experimentally validate our model and show significant
gains. DP-FCN achieves state-of-the-art performance of 83.1% and 80.9% on
PASCAL VOC 2007 and 2012 with VOC data only. Comment: Accepted to BMVC 2017 (oral).
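The deformable part-based RoI pooling module can be illustrated with a simplified sketch: the RoI is split into a grid of parts, and each part is shifted by a displacement before max-pooling, letting parts align with non-rectangular object layouts. In DP-FCN the displacements are optimized; here they are passed in explicitly, and all names and shapes are assumptions for illustration only.

```python
import numpy as np

def deformable_part_pool(feat, roi, grid=2, displacements=None):
    """Deformable part-based RoI pooling (illustrative sketch).

    feat: (H, W) feature map; roi: (y0, x0, y1, x1) integer box;
    grid: number of parts per side; displacements: (grid, grid, 2)
    integer offsets (dy, dx) applied to each part before pooling.
    Returns a (grid, grid) array of max-pooled part scores.
    """
    y0, x0, y1, x1 = roi
    h, w = (y1 - y0) // grid, (x1 - x0) // grid
    if displacements is None:
        displacements = np.zeros((grid, grid, 2), dtype=int)
    H, W = feat.shape
    out = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            dy, dx = displacements[i, j]
            # shift the part window, clipped to stay inside the map
            ys = np.clip(y0 + i * h + dy, 0, H - h)
            xs = np.clip(x0 + j * w + dx, 0, W - w)
            out[i, j] = feat[ys:ys + h, xs:xs + w].max()
    return out
```

The deformation-aware localization module would additionally feed the chosen displacements into the bounding box regressor, which this sketch omits.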
BLOCK: Bilinear Superdiagonal Fusion for Visual Question Answering and Visual Relationship Detection
Multimodal representation learning is gaining more and more interest within
the deep learning community. While bilinear models provide an interesting
framework to find subtle combinations of modalities, their number of parameters
grows quadratically with the input dimensions, making their practical
implementation within classical deep learning pipelines challenging. In this
paper, we introduce BLOCK, a new multimodal fusion scheme based on the
block-superdiagonal tensor decomposition. It leverages the notion of block-term
ranks, which generalizes the concepts of tensor rank and mode ranks,
already used for multimodal fusion. It allows defining new ways to optimize
the tradeoff between the expressiveness and complexity of the fusion model, and
is able to represent very fine interactions between modalities while
maintaining powerful mono-modal representations. We demonstrate the practical
interest of our fusion model by using BLOCK for two challenging tasks: Visual
Question Answering (VQA) and Visual Relationship Detection (VRD), where we
design end-to-end learnable architectures for representing relevant
interactions between modalities. Through extensive experiments, we show that
BLOCK compares favorably with respect to state-of-the-art multimodal fusion
models for both VQA and VRD tasks. Our code is available at
https://github.com/Cadene/block.bootstrap.pytorch
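The core idea of BLOCK, full bilinear interactions restricted to blocks of the projected inputs, can be sketched in a few lines of NumPy. This is a simplified illustration with fixed (rather than learned) parameters; all names and shapes are assumptions, not the released implementation:

```python
import numpy as np

def block_fusion(x, y, cores, Wx, Wy):
    """Block-superdiagonal bilinear fusion (simplified sketch).

    x, y: mono-modal input vectors; Wx, Wy: projections onto R blocks
    of size d each (shape (R*d, dim_x) and (R*d, dim_y)); cores: list
    of R small tensors of shape (d, d, k), one per diagonal block.
    Each block r computes a full bilinear interaction between x_r and
    y_r through its core tensor, and the R outputs are concatenated,
    so the quadratic parameter cost is paid only within blocks.
    """
    R = len(cores)
    xb = (Wx @ x).reshape(R, -1)  # per-block projections of x
    yb = (Wy @ y).reshape(R, -1)  # per-block projections of y
    outs = [np.einsum('i,j,ijk->k', xb[r], yb[r], cores[r])
            for r in range(R)]
    return np.concatenate(outs)
```

With R blocks of size d and output size k per block, the fusion uses R*d*d*k core parameters instead of the (R*d)^2*k a single dense bilinear tensor would require.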